# Efficient matrix approximation
## Bert Tiny Finetuned Sst2
This model is based on the BERT-tiny architecture and fine-tuned on the SST-2 dataset using the M-FAC second-order optimizer for text classification tasks.

- Tags: Text Classification, Transformers, M-FAC
- Downloads: 59 · Likes: 0
## Bert Mini Finetuned Qnli
This model is a text classification model based on the BERT-mini architecture, fine-tuned on the QNLI dataset using the M-FAC second-order optimizer.

- Tags: Text Classification, Transformers, M-FAC
- Downloads: 11.93k · Likes: 0
## Bert Mini Finetuned Mnli
This model is based on the BERT-mini architecture, fine-tuned on the MNLI dataset using the M-FAC second-order optimizer for text classification tasks.

- Tags: Text Classification, Transformers, M-FAC
- Downloads: 290.56k · Likes: 1
## Bert Tiny Finetuned Qnli
A BERT-tiny model fine-tuned on the QNLI dataset using the M-FAC second-order optimizer, reported to outperform the conventional Adam optimizer.

- Tags: Text Classification, Transformers, M-FAC
- Downloads: 97.76k · Likes: 0
## Bert Mini Finetuned Sst2
A BERT-mini model fine-tuned on the SST-2 dataset using the M-FAC second-order optimizer for text classification tasks.

- Tags: Text Classification, Transformers, M-FAC
- Downloads: 13.90k · Likes: 0
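All of the models above were fine-tuned with M-FAC, a second-order method that approximates inverse-Hessian-vector products from a sliding window of recent gradients rather than forming a full Hessian. As a rough illustration only (not the actual M-FAC implementation, which uses a more efficient dynamic algorithm), here is a numpy sketch assuming a damped empirical Fisher F = λI + (1/m)·GᵀG built from m stored gradients, inverted implicitly via the Woodbury identity so that only an m×m system is ever solved:

```python
import numpy as np

def mfac_ihvp(G, v, lam=1e-3):
    """Approximate the inverse-Hessian-vector product F^{-1} v for
    F = lam*I + (1/m) * G^T G, where the rows of G are m stored gradients.
    The Woodbury identity reduces the d x d inverse to an m x m solve:
    F^{-1} v = (1/lam) * (v - G^T (lam*m*I + G G^T)^{-1} G v)."""
    m, _ = G.shape
    Gv = G @ v                                    # shape (m,)
    K = lam * m * np.eye(m) + G @ G.T             # small m x m matrix
    correction = G.T @ np.linalg.solve(K, Gv)     # shape (d,)
    return (v - correction) / lam

# Tiny usage example: check against the dense inverse on a small problem.
rng = np.random.default_rng(0)
m, d = 8, 32
G = rng.standard_normal((m, d))   # stand-in for a window of gradients
v = rng.standard_normal(d)        # stand-in for the current gradient
lam = 1e-2

F = lam * np.eye(d) + (G.T @ G) / m
assert np.allclose(np.linalg.solve(F, v), mfac_ihvp(G, v, lam), atol=1e-8)
```

The resulting `mfac_ihvp(G, g)` plays the role of the preconditioned update direction that a second-order optimizer would apply in place of the raw gradient `g`; the `G`, `lam`, and function name here are illustrative assumptions, not the released M-FAC API.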